
Deep Networks with Internal Selective Attention through Feedback Connections


Abstract

Traditional convolutional neural networks (CNN) are stationary and feedforward. They neither change their parameters during evaluation nor use feedback from higher to lower layers. Real brains, however, do. So does our Deep Attention Selective Network (dasNet) architecture. DasNet's feedback structure can dynamically alter its convolutional filter sensitivities during classification. It harnesses the power of sequential processing to improve classification performance, by allowing the network to iteratively focus its internal attention on some of its convolutional filters. Feedback is trained through direct policy search in a huge million-dimensional parameter space, through scalable natural evolution strategies (SNES). On the CIFAR-10 and CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model.
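The abstract's training method, scalable natural evolution strategies (SNES), maintains a separable Gaussian search distribution (a mean and a per-dimension standard deviation) and updates both from rank-based utilities of sampled candidates. A minimal sketch of that update loop is below; the function name, hyperparameters, and toy sphere objective are illustrative assumptions, not the paper's actual implementation or policy parameterization.

```python
import numpy as np

def snes_minimize(f, mu, sigma, iters=300, popsize=20, seed=0):
    """Minimal separable NES (SNES) sketch: per-dimension Gaussian
    search distribution, rank-based utilities, log-scale sigma update."""
    rng = np.random.default_rng(seed)
    dim = len(mu)
    eta_mu = 1.0
    eta_sigma = (3 + np.log(dim)) / (5 * np.sqrt(dim))  # commonly used SNES rate
    # Rank-based utility weights: best sample gets the largest weight,
    # weights sum to zero so the update is invariant to fitness shifts.
    ranks = np.arange(1, popsize + 1)
    u = np.maximum(0.0, np.log(popsize / 2 + 1) - np.log(ranks))
    u = u / u.sum() - 1.0 / popsize
    for _ in range(iters):
        s = rng.standard_normal((popsize, dim))   # noise in natural coordinates
        z = mu + sigma * s                        # candidate parameter vectors
        order = np.argsort([f(zi) for zi in z])   # ascending: minimization
        s = s[order]                              # align noise with ranks
        mu = mu + eta_mu * sigma * (u @ s)        # natural-gradient mean step
        sigma = sigma * np.exp(eta_sigma / 2 * (u @ (s**2 - 1)))  # sigma step
    return mu

# Toy usage: minimize a 5-dimensional sphere function.
best = snes_minimize(lambda x: float(np.sum(x**2)), mu=np.ones(5), sigma=np.ones(5))
```

Because each dimension keeps its own sigma, the per-iteration cost is linear in the number of parameters, which is what makes the approach scale to the million-dimensional policy space the abstract mentions.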


